
    Reasoning in Description Logic Ontologies for Privacy Management

    A rise in the number of ontologies that are integrated and distributed across numerous application systems may allow users to access these ontologies with different privileges and for different purposes. In this situation, preserving confidential information from possible unauthorized disclosure becomes a critical requirement. For instance, in the clinical sciences, unauthorized disclosure of medical information threatens not only the system but also, most importantly, the patients' data. Motivated by this situation, this thesis first investigates a privacy problem, called the identity problem, which asks whether the identity of (anonymous) objects stored in Description Logic ontologies can be revealed. We then consider this problem in the context of role-based access control to ontologies and extend it to the problem of deciding whether the identity belongs to a set of known individuals of cardinality smaller than a given number k. If confidential information about persons, such as their identity, their relationships, or other properties, can be deduced from an ontology, which implies that some privacy policy is not fulfilled, then one needs to repair the ontology so that the modified version complies with the policies while preserving as much information from the original ontology as possible. The repair mechanism we provide is called gentle repair and is performed via axiom weakening instead of the axiom deletion commonly used in classical approaches to ontology repair. However, policy compliance alone is not enough if a possible attacker can obtain relevant information from other sources which, together with the modified ontology, still violates the privacy policies. A safety property is proposed to alleviate this issue, and we investigate it in the context of privacy-preserving ontology publishing. Inference procedures for solving these privacy problems, together with investigations of the complexity of the procedures as well as the worst-case complexity of the problems, are the main contributions of this thesis.
    Contents:
    1. Introduction
       1.1 Description Logics
       1.2 Detecting Privacy Breaches in Information System
       1.3 Repairing Information Systems
       1.4 Privacy-Preserving Data Publishing
       1.5 Outline and Contribution of the Thesis
    2. Preliminaries
       2.1 Description Logic ALC
           2.1.1 Reasoning in ALC Ontologies
           2.1.2 Relationship with First-Order Logic
           2.1.3 Fragments of ALC
       2.2 Description Logic EL
       2.3 The Complexity of Reasoning Problems in DLs
    3. The Identity Problem and Its Variants in Description Logic Ontologies
       3.1 The Identity Problem
           3.1.1 Description Logics with Equality Power
           3.1.2 The Complexity of the Identity Problem
       3.2 The View-Based Identity Problem
       3.3 The k-Hiding Problem
           3.3.1 Upper Bounds
           3.3.2 Lower Bound
    4. Repairing Description Logic Ontologies
       4.1 Repairing Ontologies
       4.2 Gentle Repairs
       4.3 Weakening Relations
       4.4 Weakening Relations for EL Axioms
           4.4.1 Generalizing the Right-Hand Sides of GCIs
           4.4.2 Syntactic Generalizations
       4.5 Weakening Relations for ALC Axioms
           4.5.1 Generalizations and Specializations in ALC w.r.t. Role Depth
           4.5.2 Syntactical Generalizations and Specializations in ALC
    5. Privacy-Preserving Ontology Publishing for EL Instance Stores
       5.1 Formalizing Sensitive Information in EL Instance Stores
       5.2 Computing Optimal Compliant Generalizations
       5.3 Computing Optimal Safe^{\exists} Generalizations
       5.4 Deciding Optimality^{\exists} in EL Instance Stores
       5.5 Characterizing Safety^{\forall}
       5.6 Optimal P-safe^{\forall} Generalizations
       5.7 Characterizing Safety^{\forall\exists} and Optimality^{\forall\exists}
    6. Privacy-Preserving Ontology Publishing for EL ABoxes
       6.1 Logical Entailments in EL ABoxes with Anonymous Individuals
       6.2 Anonymizing EL ABoxes
       6.3 Formalizing Sensitive Information in EL ABoxes
       6.4 Compliance and Safety for EL ABoxes
       6.5 Optimal Anonymizers
    7. Conclusion
       7.1 Main Results
       7.2 Future Work
    Bibliography
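    As a hedged illustration of the identity problem (the names and axioms below are invented for this listing, not taken from the thesis): in a DL with enough "equality power", for example via at-most number restrictions, an anonymous object can be forced to coincide with a named individual.
    \[
    \begin{aligned}
    \text{ABox: } & \mathit{treatedWith}(\mathsf{ALICE}, x),\quad \mathit{treatedWith}(\mathsf{ALICE}, \mathsf{THERAPY\_1}) \quad (x \text{ anonymous})\\
    \text{TBox: } & \top \sqsubseteq\ {\leq}1\,\mathit{treatedWith}.\top\\
    \text{Entailed: } & x \approx \mathsf{THERAPY\_1}
    \end{aligned}
    \]
    Because Alice may have at most one treatedWith-successor, the anonymous object x must be equal to THERAPY_1, so its identity is revealed.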

    Privacy-Preserving Ontology Publishing for EL Instance Stores: Extended Version

    We make a first step towards adapting an existing approach for privacy-preserving publishing of linked data to Description Logic (DL) ontologies. We consider the case where both the knowledge about individuals and the privacy policies are expressed using concepts of the DL EL, which corresponds to the setting where the ontology is an EL instance store. We introduce the notions of compliance of a concept with a policy and of safety of a concept for a policy, and show how optimal compliant (safe) generalizations of a given EL concept can be computed. In addition, we investigate the complexity of the optimality problem.
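    As a rough, hedged illustration of compliance in this setting (the concept and role names are invented here): a concept is compliant with a policy if it is not subsumed by any policy concept, and a compliant generalization weakens the concept just enough to break that subsumption.
    \[
    \begin{aligned}
    P  &= \exists \mathit{seenBy}.\mathsf{Psychiatrist} && \text{(policy concept)}\\
    C  &= \mathsf{Patient} \sqcap \exists \mathit{seenBy}.(\mathsf{Psychiatrist} \sqcap \exists \mathit{worksAt}.\mathsf{Clinic}) && C \sqsubseteq P \text{, not compliant}\\
    C' &= \mathsf{Patient} \sqcap \exists \mathit{seenBy}.\exists \mathit{worksAt}.\mathsf{Clinic} && C' \not\sqsubseteq P \text{, compliant}
    \end{aligned}
    \]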

    Computing Safe Anonymisations of Quantified ABoxes w.r.t. EL Policies: Extended Version

    In recent work, we have shown how to compute compliant anonymizations of quantified ABoxes w.r.t. EL policies. In this setting, quantified ABoxes can be used to publish information about individuals, some of which are anonymized. The policy is given by concepts of the Description Logic (DL) EL, and compliance means that one cannot derive from the ABox that some non-anonymized individual is an instance of a policy concept. If one assumes that a possible attacker could have additional knowledge about some of the involved non-anonymized individuals, then compliance with a policy is not sufficient. One wants to ensure that the quantified ABox is safe in the sense that none of the secret instance information is revealed, even if the attacker has additional compliant knowledge. In the present paper, we show that safety can be decided in polynomial time, and that the unique optimal safe anonymization of a non-safe quantified ABox can be computed in exponential time, provided that the policy consists of a single EL concept. This is an extended version of an article published in: Proceedings of the 36th ACM/SIGAPP Symposium on Applied Computing (SAC ’21), ACM.
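    A small hedged example of the kind of anonymization involved (individual, role, and concept names invented here): if the policy forbids deriving that a non-anonymized individual is an instance of ∃treats.Cancer, the role assertion and the concept assertion that together witness this consequence can be split over distinct anonymized variables.
    \[
    \begin{aligned}
    \text{not compliant: } & \exists\{x\}.\ \{\mathit{treats}(\mathsf{BOB}, x),\ \mathsf{Cancer}(x)\} && \models\ (\exists \mathit{treats}.\mathsf{Cancer})(\mathsf{BOB})\\
    \text{compliant: } & \exists\{x, y\}.\ \{\mathit{treats}(\mathsf{BOB}, x),\ \mathsf{Cancer}(y)\} && \not\models\ (\exists \mathit{treats}.\mathsf{Cancer})(\mathsf{BOB})
    \end{aligned}
    \]
    Safety additionally requires that the forbidden consequence cannot be restored by combining the published ABox with compliant background knowledge an attacker may have.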

    Repairing Description Logic Ontologies by Weakening Axioms

    The classical approach for repairing a Description Logic ontology O in the sense of removing an unwanted consequence α is to delete a minimal number of axioms from O such that the resulting ontology O′ does not have the consequence α. However, the complete deletion of axioms may be too rough, in the sense that it may also remove consequences that are actually wanted. To alleviate this problem, we propose a more gentle way of repair in which axioms are not necessarily deleted, but only weakened. On the one hand, we investigate general properties of this gentle repair method. On the other hand, we propose and analyze concrete approaches for weakening axioms expressed in the Description Logic EL.
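    A hedged sketch of the difference between deletion and weakening, with axioms invented for this listing: suppose the unwanted consequence is A ⊑ ∃r.C.
    \[
    \begin{aligned}
    \mathcal{O} &= \{\, A \sqsubseteq \exists r.(B \sqcap C) \,\}, \qquad \mathcal{O} \models A \sqsubseteq \exists r.C \quad \text{(unwanted)}\\
    \text{deletion: } & \mathcal{O}' = \emptyset \qquad \text{(also loses the wanted consequence } A \sqsubseteq \exists r.B\text{)}\\
    \text{weakening: } & \mathcal{O}'' = \{\, A \sqsubseteq \exists r.B \,\} \qquad \text{(unwanted consequence gone, } A \sqsubseteq \exists r.B \text{ kept)}
    \end{aligned}
    \]
    Generalizing the right-hand side of a GCI in this way is one of the weakening relations considered for EL axioms.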

    Privacy-Preserving Ontology Publishing: The Case of Quantified ABoxes w.r.t. a Static Cycle-Restricted EL TBox: Extended Version

    We review our recent work on how to compute optimal repairs, optimal compliant anonymizations, and optimal safe anonymizations of ABoxes containing possibly anonymized individuals. The results can be used both to remove erroneous consequences from a knowledge base and to hide secret information before publication of the knowledge base, while keeping as much as possible of the original information. Updated on August 27, 2021. This is an extended version of an article accepted at DL 2021.

    Development of Geospatial Dashboard with Analytic Hierarchy Processing for the Expansion of Branch Office Location

    One of the business opportunities that exploits the geographical aspect is branch office expansion. However, this activity still requires studying various location criteria, in both quantitative and qualitative form, and those criteria need to be analyzed quickly to obtain a proper priority of locations for expansion. Therefore, this research develops a geospatial dashboard based on the analytic hierarchy process (AHP) that displays the distribution of locations and their criteria on a map. Furthermore, it recommends a list of top-priority locations for expansion, derived implicitly from various kinds of data. The research takes a case study at a local company in Indonesia and shows that the geospatial dashboard with the AHP method is capable of producing a solution with up to 90.48% accuracy in determining the priority order of locations to be expanded.
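    For readers unfamiliar with AHP, the following is a minimal sketch (not the authors' implementation; the criteria names and judgment values are hypothetical) of how priority weights and a consistency check are obtained from a pairwise comparison matrix:

        import numpy as np

        # Hypothetical pairwise comparison matrix for three location criteria,
        # using Saaty's 1-9 judgment scale (values are illustrative only).
        criteria = ["population_density", "rent_cost", "competitor_count"]
        A = np.array([
            [1.0, 3.0, 5.0],
            [1/3, 1.0, 3.0],
            [1/5, 1/3, 1.0],
        ])

        # The principal eigenvector of the comparison matrix yields the priority weights.
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        weights = eigvecs[:, k].real
        weights = weights / weights.sum()

        # Consistency ratio (CR); judgments are conventionally acceptable if CR < 0.1.
        n = A.shape[0]
        lambda_max = eigvals.real[k]
        ci = (lambda_max - n) / (n - 1)      # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index for n criteria
        cr = ci / ri

        for name, w in zip(criteria, weights):
            print(f"{name}: {w:.3f}")
        print(f"consistency ratio: {cr:.3f}")

    Candidate locations can then be ranked by the weighted sum of their normalized scores on each criterion, which is the kind of priority ordering such a dashboard displays.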

    Mixing Description Logics in Privacy-Preserving Ontology Publishing

    In previous work, we have investigated privacy-preserving publishing of Description Logic (DL) ontologies in a setting where the knowledge about individuals to be published is an EL instance store, and both the privacy policy and the possible background knowledge of an attacker are represented by concepts of the DL EL. We have introduced the notions of compliance of a concept with a policy and of safety of a concept for a policy, and have shown how, in the context mentioned above, optimal compliant (safe) generalizations of a given EL concept can be computed. In the present paper, we consider a modified setting where we assume that the background knowledge of the attacker is given by a DL different from the one in which the knowledge to be published and the safety policies are formulated. In particular, we investigate the situations where the attacker’s knowledge is given by an FL0 or an FLE concept. In both cases, we show how optimal safe generalizations can be computed. Whereas the complexity of this computation is the same (ExpTime) as in our previous results for the case of FL0, it turns out to be lower (polynomial) for the more expressive DL FLE.
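    For orientation, the DLs mentioned above differ in their concept constructors roughly as follows (standard definitions, with illustrative concepts):
    \[
    \begin{aligned}
    \mathcal{EL}&: \top,\ C \sqcap D,\ \exists r.C && \text{e.g. } \mathsf{Patient} \sqcap \exists \mathit{seenBy}.\mathsf{Doctor}\\
    \mathcal{FL}_0&: \top,\ C \sqcap D,\ \forall r.C && \text{e.g. } \mathsf{Patient} \sqcap \forall \mathit{seenBy}.\mathsf{Doctor}\\
    \mathcal{FLE}&: \top,\ C \sqcap D,\ \exists r.C,\ \forall r.C && \text{(both kinds of restriction)}
    \end{aligned}
    \]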